Incentive Design and Market Evolution of Mobile User-Provided Networks
An operator-assisted user-provided network (UPN) has the potential to achieve
low-cost, ubiquitous Internet connectivity without significantly increasing
the network infrastructure investment. In this paper, we consider such a
network where the network operator encourages some of her subscribers to
operate as mobile Wi-Fi hotspots (hosts), providing Internet connectivity for
other subscribers (clients). We formulate the interaction between the operator
and mobile users as a two-stage game. In Stage I, the operator determines the
usage-based pricing and quota-based incentive mechanism for the data usage. In
Stage II, the mobile users make their decisions about whether to be a host, or
a client, or not a subscriber at all. We characterize how the users' membership
choices will affect each other's payoffs in Stage II, and how the operator
optimizes her decision in Stage I to maximize her profit. Our theoretical and
numerical results show that the operator's maximum profit increases with the
user density under the proposed hybrid pricing mechanism, and the profit gain
can be up to 50% in a dense network compared with a pricing-only approach
with no incentives.
Comment: This manuscript serves as the online technical report of the article published in the IEEE Workshop on Smart Data Pricing (SDP), 201
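The two-stage structure can be sketched with backward induction on a toy model. The payoff forms, host cost, and grid search below are illustrative assumptions, not the paper's actual model:

```python
# Toy backward-induction sketch of the two-stage game: in Stage II users pick
# the best-paying role given prices; in Stage I the operator searches prices
# anticipating those choices. All payoff forms and constants are assumptions.

def stage2_membership(price, reward, valuation, host_cost=0.3):
    """A user picks the role with the highest payoff (hypothetical payoffs)."""
    payoffs = {
        "host": valuation - price + reward - host_cost,  # quota reward, hosting cost
        "client": valuation - price,                     # usage-based price only
        "alien": 0.0,                                    # not subscribing
    }
    return max(payoffs, key=payoffs.get)

def operator_profit(price, reward, valuations, cost_per_user=0.2):
    """Stage-I objective: revenue from subscribers minus rewards and costs."""
    profit = 0.0
    for v in valuations:
        role = stage2_membership(price, reward, v)
        if role == "host":
            profit += price - reward - cost_per_user
        elif role == "client":
            profit += price - cost_per_user
    return profit

valuations = [0.2 * i for i in range(1, 11)]  # heterogeneous user valuations
# Stage I: grid search over (price, reward), anticipating Stage-II reactions.
best = max(
    ((p / 10, r / 10) for p in range(1, 11) for r in range(0, 11)),
    key=lambda pr: operator_profit(pr[0], pr[1], valuations),
)
```

In this sketch, users with valuations below the price opt out entirely, which is what makes the operator's Stage-I trade-off nontrivial.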
RewardRating: A Mechanism Design Approach to Improve Rating Systems
Rating systems now play a crucial role in attracting customers to
different services. However, because fake ratings are difficult to detect,
attackers can unfairly sway a rating's aggregated score. This
malicious behavior can negatively affect users and businesses. To overcome this
problem, we take a mechanism-design approach to increase the cost of fake
ratings while providing incentives for honest ratings. Our proposed mechanism,
RewardRating, is inspired by the stock-market model, in which users can
invest in their ratings for services and receive a reward based on future
ratings. First, we formally model the problem and discuss budget-balanced and
incentive-compatibility specifications. Then, we suggest a profit-sharing
scheme to cover the rating system's requirements. Finally, we analyze the
performance of our proposed mechanism.
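A minimal toy of the "invest in a rating, get rewarded by future ratings" idea might look as follows; the per-label pooling and 50% share are invented for illustration and are not the paper's budget-balanced scheme:

```python
from collections import defaultdict

# Toy illustration only: each rater stakes a fixed amount on a label, and a
# share of every later stake on the same label is split equally among the
# earlier raters of that label. The 50/50 split is an invented placeholder.

def settle(ratings, stake=1.0, share=0.5):
    """ratings: list of (rater, label) in arrival order; returns rewards."""
    rewards = defaultdict(float)
    seen = defaultdict(list)                 # label -> earlier raters
    for rater, label in ratings:
        pool = share * stake                 # portion of this stake shared back
        if seen[label]:
            for prev in seen[label]:
                rewards[prev] += pool / len(seen[label])
        seen[label].append(rater)
    return dict(rewards)

payout = settle([("a", "good"), ("b", "good"), ("c", "bad"), ("d", "good")])
```

Here an early honest rater ("a") profits when later raters agree, while a lone fake rating ("c") earns nothing, which is the intuition the mechanism builds on.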
Federated Learning with Reduced Information Leakage and Computation
Federated learning (FL) is a distributed learning paradigm that allows
multiple decentralized clients to collaboratively learn a common model without
sharing local data. Although local data is not exposed directly, privacy
concerns nonetheless exist as clients' sensitive information can be inferred
from intermediate computations. Moreover, such information leakage accumulates
substantially over time as the same data is repeatedly used during the
iterative learning process. As a result, it can be particularly difficult to
balance the privacy-accuracy trade-off when designing privacy-preserving FL
algorithms. In this paper, we introduce Upcycled-FL, a novel federated learning
framework with first-order approximation applied at every even iteration. Under
this framework, half of the FL updates incur no information leakage and require
much less computation. We first conduct a theoretical analysis of the
convergence rate of Upcycled-FL, and then apply perturbation mechanisms to
preserve privacy. Experiments on real-world data show that Upcycled-FL
consistently outperforms existing methods over heterogeneous data, and
significantly improves the privacy-accuracy trade-off while reducing training
time by 48% on average.
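The alternating structure can be sketched on a toy problem: odd rounds perform a real, data-dependent local update, while even rounds reuse the previous model difference through a first-order extrapolation and touch no client data. The quadratic losses, step size, and extrapolation weight are stand-in assumptions:

```python
import numpy as np

# Toy sketch of the even/odd alternation: odd rounds compute a real gradient
# step on each client's data; even rounds extrapolate from the last model
# difference, touching no data. Losses and constants are assumptions.

rng = np.random.default_rng(0)
targets = rng.normal(size=(5, 3))         # client i's loss: ||x - targets[i]||^2

def client_update(x, target, lr=0.1):
    """One gradient step on a client's local quadratic loss."""
    return x - lr * 2 * (x - target)

x_prev = np.zeros(3)
x = np.zeros(3)
for t in range(1, 21):
    if t % 2 == 1:                         # odd round: data-dependent update
        x_new = np.mean([client_update(x, tgt) for tgt in targets], axis=0)
    else:                                  # even round: data-free extrapolation
        x_new = x + 0.5 * (x - x_prev)     # first-order reuse of the last update
    x_prev, x = x, x_new

# x approaches the global minimizer, the mean of the client targets.
```

Because half the rounds never read local data, those rounds add no information leakage and almost no computation, matching the trade-off described above.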
Improving Fairness and Privacy in Selection Problems
Supervised learning models have been increasingly used for making decisions
about individuals in applications such as hiring, lending, and college
admission. These models may inherit pre-existing biases from training datasets
and discriminate based on protected attributes (e.g., race or gender). In
addition to unfairness, privacy concerns also arise when the use of models
reveals sensitive personal information. Among various privacy notions,
differential privacy has become popular in recent years. In this work, we study
the possibility of using a differentially private exponential mechanism as a
post-processing step to improve both fairness and privacy of supervised
learning models. Unlike many existing works, we consider a scenario where a
supervised model is used to select a limited number of applicants as the number
of available positions is limited. This assumption is well-suited for various
scenarios, such as job applications and college admissions. We use "equal
opportunity" as the fairness notion and show that the exponential mechanism
can make the decision-making process perfectly fair. Moreover, the experiments
on real-world datasets show that the exponential mechanism can improve both
privacy and fairness, with a slight decrease in accuracy compared to the model
without post-processing.
Comment: This paper has been accepted for publication in the 35th AAAI Conference on Artificial Intelligence
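A sketch of differentially private top-k selection via repeated exponential-mechanism draws follows; the scores, epsilon, sensitivity, and even budget split are assumptions, and the paper's per-group fairness adjustment is not reproduced:

```python
import math
import random

# Illustrative exponential-mechanism selection of k applicants from model
# scores. Each sequential draw weights candidate i by
# exp(eps * score_i / (2 * k * sensitivity)), splitting the budget over draws.

def exponential_mechanism_select(scores, k, epsilon, sensitivity=1.0):
    """Sample k distinct candidate indices, high scores being more likely."""
    remaining = dict(enumerate(scores))
    chosen = []
    for _ in range(k):
        ids = list(remaining)
        weights = [
            math.exp(epsilon * remaining[i] / (2 * k * sensitivity))
            for i in ids
        ]
        pick = random.choices(ids, weights=weights)[0]  # weighted random draw
        chosen.append(pick)
        del remaining[pick]                             # selection w/o replacement
    return chosen

random.seed(0)
selected = exponential_mechanism_select([0.9, 0.2, 0.8, 0.1, 0.7], k=2, epsilon=5.0)
```

The randomness is what buys both privacy and the possibility of equalizing selection rates across groups: lower-scored candidates retain a nonzero chance of selection.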
An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift
A common assumption in semi-supervised learning is that the labeled,
unlabeled, and test data are drawn from the same distribution. However, this
assumption is not satisfied in many applications. In many scenarios, data
is collected sequentially (e.g., in healthcare) and its distribution
may change over time, often exhibiting so-called covariate shift. In this
paper, we propose an approach for semi-supervised learning algorithms that is
capable of addressing this issue. Our framework also recovers some popular
methods, including entropy minimization and pseudo-labeling. We derive new
information-theoretic generalization error upper bounds inspired by this
framework. Our bounds apply to both general semi-supervised
learning and the covariate-shift scenario. Finally, we show numerically that
our method outperforms previous approaches for semi-supervised
learning under covariate shift.
Comment: Accepted at AISTATS 202
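Pseudo-labeling, one special case the framework is said to recover, can be sketched on a 1-D toy with a shifted unlabeled distribution; the threshold classifier and confidence cutoff are illustrative assumptions:

```python
import numpy as np

# Toy pseudo-labeling under covariate shift: the unlabeled pool is drawn from
# a shifted distribution, confident predictions on it are turned into labels,
# and the classifier is refit. Classifier and cutoff are stand-ins.

def fit_threshold(x, y):
    """Pick the 1-D threshold maximizing training accuracy of rule x >= t."""
    candidates = np.unique(x)
    return max(candidates, key=lambda t: np.mean((x >= t) == y))

rng = np.random.default_rng(1)
x_lab = rng.normal(0, 1, 50)
y_lab = (x_lab >= 0).astype(int)
x_unlab = rng.normal(1, 1, 200)            # covariate shift: mean differs

t0 = fit_threshold(x_lab, y_lab)
# Keep only confident pseudo-labels: points far from the decision boundary.
conf = np.abs(x_unlab - t0) > 0.5
x_aug = np.concatenate([x_lab, x_unlab[conf]])
y_aug = np.concatenate([y_lab, (x_unlab[conf] >= t0).astype(int)])
t1 = fit_threshold(x_aug, y_aug)           # refit on the augmented set
```

The shifted unlabeled pool illustrates the failure mode the bounds address: pseudo-labels are generated where labeled data is sparse, so their quality depends on how the distributions differ.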
Medium Optimization for Synaptobrevin Production Using Statistical Methods
Background: Botulinum toxin, the most potent biological toxin, has become a powerful therapeutic tool for a growing number of clinical applications. Molecular studies have identified a family of synaptic vesicle-associated membrane proteins (VAMPs, also known as synaptobrevins), which have been implicated in synaptic vesicle docking and fusion with plasma membrane proteins.
Materials and Methods: Using synaptobrevin as a substrate in an in vitro assay is a method to detect BoNT activity. We have been working on optimizing bacterial expression conditions and media for high-level production of the synaptobrevin peptide. Statistics-based experimental design was used to investigate the effect of culture variables (E. coli strain, peptone, IPTG, yeast extract, ampicillin, and temperature) on synaptobrevin production by E. coli.
Results: A 2^4 fractional factorial design with center points revealed that IPTG and temperature were the most significant factors, whereas the other factors were not important within the levels tested. This screening was followed by a central composite design to develop a response surface for medium optimization. The optimum conditions for synaptobrevin production were found to be: IPTG 29 mM, peptone 10 g/L, yeast extract 5 g/L, temperature 23°C, and ampicillin 100 mg/L. This medium was projected to produce, theoretically, 115 mg/L synaptobrevin.
Conclusion: The optimum conditions for synaptobrevin production were: BL21 (E. coli strain), LB medium (peptone 10 g/L, yeast extract 5 g/L), ampicillin (100 mg/L), IPTG (0.29 mg/L), and temperature (23°C).
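The screening step can be illustrated by reading main effects off a two-level factorial design; the response values below are synthetic, constructed so that IPTG and temperature dominate, and are not the study's measurements:

```python
import itertools
import numpy as np

# Sketch of main-effect estimation in a two-level factorial screening design.
# A full 2^4 design is used for simplicity; the yields are synthetic stand-ins
# in which IPTG and temperature dominate, mimicking the reported finding.

factors = ["IPTG", "peptone", "yeast_extract", "temperature"]
design = np.array(list(itertools.product([-1, 1], repeat=4)))  # 16 runs

rng = np.random.default_rng(2)
true_effects = np.array([8.0, 0.3, 0.2, -6.0])   # invented effect sizes
response = 50 + design @ true_effects + rng.normal(0, 0.5, len(design))

# Main effect of a factor = mean response at its high level (+1)
# minus mean response at its low level (-1).
main_effects = {
    name: response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    for j, name in enumerate(factors)
}
```

Factors whose main effects stand out against the noise (here IPTG and temperature) are the ones carried forward into the central composite design for response-surface optimization.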